12 research outputs found

    Robust Door Operation with the Toyota Human Support Robot. Robotic perception, manipulation and learning

    Robots are progressively spreading to urban, social and assistive domains. Service robots operating in domestic environments typically face a variety of objects they must deal with to fulfill their tasks. Some of these objects are articulated, such as cabinet doors and drawers. The ability to deal with such objects is relevant, for example, to navigate between rooms or to assist humans in their mobility. The exploration of this task raises interesting questions in some of the main robotics research threads: perception, manipulation and learning. In this work, a general framework to robustly operate different types of doors with a mobile manipulator robot is proposed. To push the state of the art, a novel algorithm is proposed that fuses a convolutional neural network with point cloud processing to estimate the end-effector grasping pose in real time, for multiple handles simultaneously, from single RGB-D images. In addition, a Bayesian framework endows the robot with the ability to learn the kinematic model of the door from observations of its motion, as well as from previous experiences or human demonstrations. Combining this probabilistic approach with state-of-the-art motion planning…

    Robust and adaptive door operation with a mobile robot

    The version of record is available online at: http://dx.doi.org/10.1007/s11370-021-00366-7. The ability to deal with articulated objects is very important for robots assisting humans. In this work, a framework to robustly and adaptively operate common doors using an autonomous mobile manipulator is proposed. To push forward the state of the art in robustness and speed, we devise a novel algorithm that fuses a convolutional neural network with efficient point cloud processing. This advancement enables real-time grasping pose estimation for multiple handles from RGB-D images, providing a speed-up for assistive human-centered applications. In addition, we propose a versatile Bayesian framework that endows the robot with the ability to infer the door kinematic model from observations of its motion and to learn from previous experiences or human demonstrations. Combining these algorithms with a Task Space Region motion planner, we achieve efficient door operation regardless of the kinematic model. We validate our framework with real-world experiments using the Toyota Human Support Robot. Peer Reviewed. Postprint (author's final draft).
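    The kinematic-model inference described in this abstract can be illustrated with a minimal Bayesian model-selection sketch. This is not the authors' implementation, only an assumed toy version: given observed handle positions in the door-opening plane, compare a prismatic (straight-line, e.g. drawer) hypothesis against a revolute (circular-arc, e.g. hinged door) hypothesis under Gaussian residual noise.

```python
import numpy as np

def fit_prismatic(points):
    # Fit a straight line (prismatic joint) via PCA; return point-to-line residuals.
    centered = points - points.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    direction = vt[0]
    residuals = centered - np.outer(centered @ direction, direction)
    return np.linalg.norm(residuals, axis=1)

def fit_revolute(points):
    # Algebraic circle fit (Kasa method): x^2 + y^2 = 2*cx*x + 2*cy*y + c.
    x, y = points[:, 0], points[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    b = x**2 + y**2
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    r = np.sqrt(c + cx**2 + cy**2)
    return np.abs(np.hypot(x - cx, y - cy) - r)

def log_likelihood(residuals, sigma=0.01):
    # Gaussian observation noise (sigma in meters, an assumed value).
    return float(-0.5 * np.sum((residuals / sigma) ** 2)
                 - residuals.size * np.log(sigma * np.sqrt(2 * np.pi)))

def infer_door_model(points):
    # Posterior over {prismatic, revolute} with a uniform prior.
    logp = np.array([log_likelihood(fit_prismatic(points)),
                     log_likelihood(fit_revolute(points))])
    post = np.exp(logp - logp.max())
    post /= post.sum()
    return {"prismatic": post[0], "revolute": post[1]}

# A handle on a hinged door traces an arc: radius 0.8 m, 0.6 rad of motion.
angles = np.linspace(0.0, 0.6, 20)
arc = 0.8 * np.column_stack([np.cos(angles), np.sin(angles)])
print(infer_door_model(arc))  # the revolute hypothesis should dominate
```

A real system would also maintain a prior over models learned from previous doors, which is where the "learn from previous experiences or human demonstrations" part of the framework would enter.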

    Database for 3D human pose estimation from single depth images

    This work is part of the project I-DRESS (Assistive interactive robotic system for support in dressing). Its specific objective is the detection of human body postures and the tracking of their movements. To this end, this work aims to create the image database needed for training the pose-estimation algorithms of the robotic system's artificial vision, based on depth images obtained by a Time-of-Flight (ToF) depth camera, such as the one incorporated in the Kinect One (Kinect v2) device. Peer Reviewed. Preprint.

    ROS wrapper for real-time multi-person pose estimation with a single camera

    For robots to be deployable in human-occupied environments, they must have human-awareness and generate human-aware behaviors and policies. OpenPose is a library for real-time multi-person keypoint detection. We considered implementing a ROS package that allows the estimation of 2D pose from plain RGB images, for which we introduce a ROS wrapper that automatically recovers the pose of several people from a single camera using OpenPose. Additionally, a ROS node has been developed to obtain 3D pose estimates from the initial 2D pose estimation when a depth image is synchronized with the RGB image (an RGB-D image, such as from a Kinect camera). This aim is attained by projecting the 2D pose estimation onto the point cloud of the depth image. Peer Reviewed. Preprint.
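    The 2D-to-3D step described above — lifting 2D keypoints with synchronized depth — can be sketched with the standard pinhole back-projection used for RGB-D cameras. The intrinsics below (FX, FY, CX, CY) are assumed placeholder values in the style of a Kinect-like sensor, not this package's actual calibration:

```python
import numpy as np

# Assumed pinhole intrinsics for a Kinect-like 640x480 RGB-D camera.
FX, FY = 525.0, 525.0   # focal lengths in pixels
CX, CY = 319.5, 239.5   # principal point in pixels

def keypoints_2d_to_3d(keypoints_uv, depth_image):
    """Back-project 2D keypoints (u, v) to 3D camera coordinates using the
    pinhole model: x = (u - cx) * z / fx, y = (v - cy) * z / fy."""
    points = []
    for u, v in keypoints_uv:
        z = depth_image[int(round(v)), int(round(u))]  # depth in meters
        if z <= 0:  # missing depth: the sensor returned no reading here
            points.append(None)
            continue
        points.append(((u - CX) * z / FX, (v - CY) * z / FY, z))
    return points

# Synthetic flat depth image at 2 m; one keypoint at the principal point,
# one offset 100 px to the right.
depth = np.full((480, 640), 2.0)
print(keypoints_2d_to_3d([(319.5, 239.5), (419.5, 239.5)], depth))
```

In the actual node, the camera matrix would come from the `camera_info` topic rather than hard-coded constants, and a real pipeline would filter invalid depth readings around each keypoint.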

    Robot Learning from Demonstration with Gaussian Processes

    Autonomous systems are no longer confined to factories; they are progressively spreading to urban, social, and assistive domains. However, in order to become handy co-workers and helpful assistants, robots must be endowed with quite different abilities than their industrial ancestors, and a lot of additional research is still required. A key challenge in intelligent robotics is creating autonomous agents capable of directly interacting with the world around them to achieve their goals. Learning plays a central role in intelligent autonomous systems, as the real world contains too much uncertainty and a robot must be capable of dealing with environments that neither it nor its designers have foreseen. Learning from demonstration is a promising paradigm that allows robots to learn complex tasks that cannot be easily scripted but can be demonstrated by a human teacher. In this thesis, we develop a complete learning-from-demonstration framework. We first present a whole-body teleoperation approach for human motion transfer, which allows a teacher equipped with a motion capture system to intuitively provide demonstrations to a robot. Then, to learn a generalized representation of the task that can be adapted to unforeseen scenarios, we unify the main components of a state-of-the-art method in a single, entirely Gaussian-process-based formulation. We evaluate our approach through a series of real-world experiments with the manipulator robot TIAGo, achieving satisfactory results. Finally, we must be aware that we are at a technological inflection point in which robots are developing the capacity to greatly increase their cognitive and physical capabilities. This will raise complex issues regarding the economy, ethics, law, and the environment, of which we provide an overview in this thesis.
Intelligent robotics offers an unimaginable spectrum of possibilities; with the appropriate attention and the right policies, it opens the door to new sources of value and growth. However, it is in the hands of scientists and engineers not to look away, and to anticipate the potential impacts in order to turn robots into a motor of global prosperity. Sustainable Development Goals::9 - Industry, Innovation and Infrastructure::9.5 - Enhance scientific research and upgrade the technological capabilities of the industrial sectors of all countries, in particular developing countries, including by encouraging innovation and substantially increasing, by 2030, the number of people working in research and development per million inhabitants, as well as public and private spending on research and development.
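    As a rough illustration of the Gaussian-process machinery such a framework builds on — not the thesis' actual formulation — the following is a textbook GP regression sketch (posterior mean and variance with a squared-exponential kernel), where a single demonstrated trajectory plays the role of training data and the GP generalizes it to new query times:

```python
import numpy as np

def rbf_kernel(a, b, lengthscale=0.2, variance=1.0):
    # Squared-exponential kernel between two sets of 1-D inputs.
    d = a[:, None] - b[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

def gp_fit_predict(t_demo, y_demo, t_query, noise=1e-4):
    """Standard GP regression posterior via Cholesky factorization.
    t_demo/y_demo: demonstrated trajectory; t_query: times to generalize to."""
    K = rbf_kernel(t_demo, t_demo) + noise * np.eye(len(t_demo))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_demo))
    Ks = rbf_kernel(t_query, t_demo)
    mean = Ks @ alpha
    v = np.linalg.solve(L, Ks.T)
    var = rbf_kernel(t_query, t_query).diagonal() - np.sum(v**2, axis=0)
    return mean, var

# A demonstrated 1-D trajectory, e.g. one joint angle over normalized time.
t = np.linspace(0.0, 1.0, 15)
y = np.sin(2 * np.pi * t)
mean, var = gp_fit_predict(t, y, np.array([0.25, 0.5]))
print(mean)  # close to sin(pi/2) = 1 and sin(pi) = 0
```

The posterior variance is what makes this paradigm attractive for learning from demonstration: away from the demonstrated data the robot knows its prediction is uncertain, which can drive requests for further demonstrations.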
